AI Output


Generative Augmented Inference

Lu, Cheng, Wang, Mengxin, Zhang, Dennis J., Zhang, Heng

arXiv.org Machine Learning

Data-driven operations management often relies on parameters estimated from costly human-generated labels. Recent advances in large language models (LLMs) and other AI systems offer inexpensive auxiliary data, but introduce a new challenge: AI outputs are not direct observations of the target outcomes, but may involve high-dimensional representations with complex and unknown relationships to human labels. Conventional methods leverage AI predictions as direct proxies for true labels, which can be inefficient or unreliable when this relationship is weak or misspecified. We propose Generative Augmented Inference (GAI), a general framework that incorporates AI-generated outputs as informative features for estimating models of human-labeled outcomes. GAI uses an orthogonal moment construction that enables consistent estimation and valid inference under flexible, nonparametric relationships between LLM-generated outputs and human labels. We establish asymptotic normality and show a "safe default" property: relative to human-data-only estimators, GAI weakly improves estimation efficiency under arbitrary auxiliary signals and yields strict gains whenever the auxiliary information is predictive. Empirically, GAI outperforms benchmarks across diverse settings. In conjoint analysis with weak auxiliary signals, GAI reduces estimation error by about 50% and lowers human labeling requirements by over 75%. In retail pricing, where all methods access the same auxiliary inputs, GAI consistently outperforms alternative estimators, highlighting the value of its construction rather than differences in information. In health insurance choice, it cuts labeling requirements by over 90% while maintaining decision accuracy. Across applications, GAI improves confidence interval coverage without inflating width. Overall, GAI provides a principled and scalable approach to integrating AI-generated information.
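The core construction the abstract describes (AI outputs used as flexible features, with an orthogonal correction estimated on human labels) can be sketched in a few lines. The following is a generic cross-fitted, prediction-powered-style mean estimator, not the authors' exact GAI estimator; the function name, the choice of gradient boosting for the nuisance map, and the mean-estimation target are all illustrative assumptions.

```python
# Hedged sketch: a cross-fitted estimator in the spirit of the abstract.
# AI outputs z enter as *features* through a learned nuisance map f(z),
# rather than as direct proxies for the label y; a correction term fit on
# human labels debiases the result.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import KFold

def gai_style_mean(y_lab, z_lab, z_unlab, n_splits=5, seed=0):
    """Estimate E[y] from a small human-labeled set (y_lab, z_lab) plus a
    large pool of AI-generated features z_unlab for unlabeled units."""
    f_lab = np.zeros(len(y_lab))        # cross-fitted f(z) on labeled units
    f_unlab = np.zeros(len(z_unlab))    # averaged f(z) on unlabeled units
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train, test in kf.split(z_lab):
        model = GradientBoostingRegressor().fit(z_lab[train], y_lab[train])
        f_lab[test] = model.predict(z_lab[test])
        f_unlab += model.predict(z_unlab) / n_splits
    # Cheap auxiliary signal plus a human-label correction term.
    return f_unlab.mean() + (y_lab - f_lab).mean()
```

If f(z) carries no signal, the correction term collapses the estimate back toward the human-only sample mean, mirroring the "safe default" property the abstract claims; if f(z) is predictive, the residual correction has lower variance than the raw labels.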


A First-Principles Based Risk Assessment Framework and the IEEE P3396 Standard

Tong, Richard J., Cortês, Marina, DeFalco, Jeanine A., Underwood, Mark, Zalewski, Janusz

arXiv.org Artificial Intelligence

Generative Artificial Intelligence (AI) is enabling unprecedented automation in content creation and decision support, but it also raises novel risks. This paper presents a first-principles risk assessment framework underlying the IEEE P3396 Recommended Practice for AI Risk, Safety, Trustworthiness, and Responsibility. We distinguish between process risks (risks arising from how AI systems are built or operated) and outcome risks (risks manifest in the AI system's outputs and their real-world effects), arguing that generative AI governance should prioritize outcome risks. Central to our approach is an information-centric ontology that classifies AI-generated outputs into four fundamental categories: (1) Perception-level information, (2) Knowledge-level information, (3) Decision/Action plan information, and (4) Control tokens (access or resource directives). This classification allows systematic identification of harms and more precise attribution of responsibility to stakeholders (developers, deployers, users, regulators) based on the nature of the information produced. We illustrate how each information type entails distinct outcome risks (e.g. deception, misinformation, unsafe recommendations, security breaches) and requires tailored risk metrics and mitigations. By grounding the framework in the essence of information, human agency, and cognition, we align risk evaluation with how AI outputs influence human understanding and action. The result is a principled approach to AI risk that supports clear accountability and targeted safeguards, in contrast to broad application-based risk categorizations. We include example tables mapping information types to risks and responsibilities. This work aims to inform the IEEE P3396 Recommended Practice and broader AI governance with a rigorous, first-principles foundation for assessing generative AI risks while enabling responsible innovation.
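The four-category ontology and the example harms the abstract pairs with it lend themselves to a simple lookup structure. The mapping below is a hypothetical illustration of one plausible pairing read from the abstract, not the normative tables of the IEEE P3396 draft; all identifiers are assumptions.

```python
# Hedged sketch: the abstract's four output categories mapped to the example
# outcome risks it names. Pairings are illustrative readings of the abstract.
OUTPUT_RISK_MAP = {
    "perception_level":     "deception",
    "knowledge_level":      "misinformation",
    "decision_action_plan": "unsafe recommendations",
    "control_tokens":       "security breaches",
}

def example_risk(output_category: str) -> str:
    """Return the example outcome risk to assess for a given output type."""
    try:
        return OUTPUT_RISK_MAP[output_category]
    except KeyError:
        raise ValueError(f"unknown output category: {output_category}")
```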


Your Friend Asked You a Question. Don't Copy and Paste an Answer From a Chatbot

WIRED

Your friend came to you because they respect your knowledge and opinion, and outsourcing the answer to a machine is lazy and rude. Back in the 2010s, a website called Let Me Google That For You gained a notable amount of popularity for serving a single purpose: snark. The site lets you generate a custom link that you can send somebody who asks you a question. When they click the link, it plays an animation of the process of typing a question into Google.


The Epistemic Suite: A Post-Foundational Diagnostic Methodology for Assessing AI Knowledge Claims

Kelly, Matthew

arXiv.org Artificial Intelligence

Large Language Models (LLMs) generate fluent, plausible text that can mislead users into mistaking simulated coherence for genuine understanding. This paper introduces the Epistemic Suite, a post-foundational diagnostic methodology for surfacing the epistemic conditions under which AI outputs are produced and received. Rather than determining truth or falsity, the Suite operates through twenty diagnostic lenses, applied by practitioners as context warrants, to reveal patterns such as confidence laundering, narrative compression, displaced authority, and temporal drift. It is grounded in three design principles: diagnosing production before evaluating claims, preferring diagnostic traction over foundational settlement, and embedding reflexivity as a structural requirement rather than an ethical ornament. When enacted, the Suite shifts language models into a diagnostic stance, producing inspectable artifacts (flags, annotations, contradiction maps, and suspension logs: the FACS bundle) that create an intermediary layer between AI output and human judgment. A key innovation is epistemic suspension, a practitioner-enacted circuit breaker that halts continuation when warrant is exceeded, with resumption based on judgment rather than rule. The methodology also includes an Epistemic Triage Protocol and a Meta-Governance Layer to manage proportionality and link activation to relational accountability, consent, historical context, and pluralism safeguards. Unlike internalist approaches that embed alignment into model architectures (e.g., RLHF or epistemic-integrity proposals), the Suite operates externally as scaffolding, preserving expendability and refusal as safeguards rather than failures. It preserves the distinction between performance and understanding, enabling accountable deliberation while maintaining epistemic modesty.
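The FACS bundle and the epistemic-suspension circuit breaker are concrete enough to sketch as data structures. All field and function names below are assumptions; the paper frames suspension as practitioner judgment, so this sketch models only the bookkeeping around that judgment, not the judgment itself.

```python
# Hedged sketch: FACS bundle artifacts and a suspension "circuit breaker"
# as plain Python data structures. Names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class FACSBundle:
    flags: list = field(default_factory=list)
    annotations: list = field(default_factory=list)
    contradiction_map: dict = field(default_factory=dict)
    suspension_log: list = field(default_factory=list)

def continue_or_suspend(bundle: FACSBundle, warranted: bool, reason: str) -> bool:
    """Halt continuation when warrant is exceeded and log the suspension.
    Resumption is by practitioner judgment, so there is no automatic reset."""
    if not warranted:
        bundle.suspension_log.append(reason)
        return False  # suspended: hand the artifacts over to human judgment
    return True
```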


Classifying Epistemic Relationships in Human-AI Interaction: An Exploratory Approach

Yang, Shengnan, Ma, Rongqian

arXiv.org Artificial Intelligence

As AI systems become integral to knowledge-intensive work, questions arise not only about their functionality but also about their epistemic roles in human-AI interaction. While HCI research has proposed various AI role typologies, it often overlooks how AI reshapes users' roles as knowledge contributors. This study examines how users form epistemic relationships with AI -- how they assess, trust, and collaborate with it in research and teaching contexts. Based on 31 interviews with academics across disciplines, we developed a five-part codebook and identified five relationship types: Instrumental Reliance, Contingent Delegation, Co-agency Collaboration, Authority Displacement, and Epistemic Abstention. These reflect variations in trust, assessment modes, tasks, and human epistemic status. Our findings show that epistemic roles are dynamic and context-dependent. We argue for shifting beyond static metaphors of AI toward a more nuanced framework that captures how humans and AI co-construct knowledge, enriching HCI's understanding of the relational and normative dimensions of AI use.


Should we preserve the pre-AI internet before it is contaminated?

New Scientist

The arrival of AI chatbots marks a historical dividing line after which online material can't be completely trusted to be human-created, but how will people look back on this change? While some are urgently working to archive "uncontaminated" data from the pre-AI era, others say it is the AI outputs themselves that we need to record, so future historians can study how chatbots have evolved. Rajiv Pant, an entrepreneur and former chief technology officer at both The New York Times and The Wall Street Journal, says he sees AI as a risk to information such as news stories that form part of the historical record. "I've been thinking about this 'digital archaeology' problem since ChatGPT launched, and it's becoming more urgent every month," says Pant. "Right now, there's no reliable way to distinguish human-authored content from AI-generated material at scale." For John Graham-Cumming at cybersecurity firm Cloudflare, information produced before the end of 2022, when ChatGPT launched, is akin to low-background steel. This metal, smelted before the Trinity nuclear bomb test on 16 July 1945, is prized for use in delicate scientific and medical instruments because it doesn't contain faint radioactive contamination from the atomic weapon era that creates noise in readings. Graham-Cumming has created a website called lowbackgroundsteel.ai to archive sources of data that haven't been contaminated by AI, such as a full download of Wikipedia from August 2022. Studies have already shown that Wikipedia today shows signs of huge AI input. "There's a point at which we did everything ourselves, and then at some point we started to get augmented significantly by these chat systems," he says. "So the idea was to say – you can see it as contamination, or you can see it as a sort of a vault – you know, humans, we got to here."


Towards Uncertainty Aware Task Delegation and Human-AI Collaborative Decision-Making

Lee, Min Hun, Tok, Martyn Zhe Yu

arXiv.org Artificial Intelligence

Despite the growing promise of artificial intelligence (AI) in supporting decision-making across domains, fostering appropriate human reliance on AI remains a critical challenge. In this paper, we investigate the utility of exploring distance-based uncertainty scores for task delegation to AI and describe how these scores can be visualized through embedding representations for human-AI decision-making. After developing an AI-based system for physical stroke rehabilitation assessment, we conducted a study with 19 health professionals and 10 students in medicine/health to understand the effect of exploring distance-based uncertainty scores on users' reliance on AI. Our findings showed that distance-based uncertainty scores outperformed traditional probability-based uncertainty scores in identifying uncertain cases. In addition, after exploring confidence scores for task delegation and reviewing embedding-based visualizations of distance-based uncertainty scores, participants achieved an 8.20% higher rate of correct decisions, a 7.15% higher rate of changing their decisions to correct ones, and a 7.14% lower rate of incorrect changes after reviewing AI outputs than those reviewing probability-based uncertainty scores (p < 0.01). Our findings highlight the potential of distance-based uncertainty scores to enhance decision accuracy and appropriate reliance on AI while discussing ongoing challenges for human-AI collaborative decision-making.
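A minimal sketch of the contrast the study draws, assuming a k-nearest-neighbor embedding distance as the distance-based score (the paper's exact score is not specified in this abstract) and one minus the maximum class probability as the probability-based baseline:

```python
# Hedged sketch: distance-based vs. probability-based uncertainty scores.
# The kNN-distance instantiation is an assumption, not the paper's method.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def distance_uncertainty(train_emb, query_emb, k=5):
    """Mean distance to the k nearest labeled training embeddings; larger
    means more uncertain, flagging far-from-data cases that a confident
    softmax can miss."""
    nn = NearestNeighbors(n_neighbors=k).fit(train_emb)
    dists, _ = nn.kneighbors(query_emb)
    return dists.mean(axis=1)

def probability_uncertainty(probs):
    """Baseline: 1 - max class probability per query row."""
    return 1.0 - np.max(probs, axis=1)
```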


Variance reduction in output from generative AI

Xie, Yu, Xie, Yueqi

arXiv.org Artificial Intelligence

Generative AI models, such as ChatGPT, will increasingly replace humans in producing output for a variety of important tasks. While much prior work has focused on the improvement in the average performance of generative AI models relative to humans' performance, far less attention has been paid to the significant reduction of variance in output produced by generative AI models. In this Perspective, we demonstrate that generative AI models are inherently prone to the phenomenon of "regression toward the mean," whereby variance in output tends to shrink relative to that in real-world distributions. We discuss potential social implications of this phenomenon across three levels (societal, group, and individual) and two dimensions (material and non-material). Finally, we discuss interventions to mitigate negative effects, considering the roles of both service providers and users. Overall, this Perspective aims to raise awareness of the importance of output variance in generative AI and to foster collaborative efforts to meet the challenges posed by the reduction of variance in output generated by AI models.
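The statistical mechanism is easy to demonstrate: a model that emits the conditional mean E[y|x] (the optimal squared-error predictor) always has output variance Var(E[y|x]) <= Var(y), by the law of total variance. A toy simulation, with all distributional choices being illustrative assumptions:

```python
# Hedged sketch: variance shrinkage when a model outputs E[y|x].
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
y = 2.0 * x + rng.normal(scale=3.0, size=x.size)  # "real-world" outcomes

model_output = 2.0 * x  # an idealized model emitting the conditional mean
print(f"Var(human y)      = {y.var():.2f}")            # ~ 4 + 9 = 13
print(f"Var(model output) = {model_output.var():.2f}") # ~ 4
```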


The Design Space of Recent AI-assisted Research Tools for Ideation, Sensemaking, and Scientific Creativity

Ye, Runlong, Varona, Matthew, Huang, Oliver, Lee, Patrick Yung Kang, Liut, Michael, Nobre, Carolina

arXiv.org Artificial Intelligence

Generative AI (GenAI) tools are radically expanding the scope and capability of automation in knowledge work such as academic research. AI-assisted research tools show promise for augmenting human cognition and streamlining research processes, but could increase automation bias and stifle critical thinking. We surveyed the past three years of publications from leading HCI venues. We closely examined 11 AI-assisted research tools, five employing traditional AI approaches and six integrating GenAI, to explore how these systems envision novel capabilities and design spaces. We consolidate four design recommendations that inform cognitive engagement when working with an AI research tool: providing user agency and control; enabling divergent and convergent thinking; supporting adaptability and flexibility; and ensuring transparency and accuracy. We discuss how these ideas mark a shift in AI-assisted research tools from mimicking a researcher's established workflows to generative co-creation with the researcher and the opportunities this shift affords the research community.


Integrating Generative AI in Cybersecurity Education: Case Study Insights on Pedagogical Strategies, Critical Thinking, and Responsible AI Use

Elkhodr, Mahmoud, Gide, Ergun

arXiv.org Artificial Intelligence

The rapid advancement of Generative Artificial Intelligence (GenAI) has introduced new opportunities for transforming higher education, particularly in fields that require analytical reasoning and regulatory compliance, such as cybersecurity management. This study presents a structured framework for integrating GenAI tools into cybersecurity education, demonstrating their role in fostering critical thinking, real-world problem-solving, and regulatory awareness. The implementation strategy followed a two-stage approach, embedding GenAI within tutorial exercises and assessment tasks. Tutorials enabled students to generate, critique, and refine AI-assisted cybersecurity policies, while assessments required them to apply AI-generated outputs to real-world scenarios, ensuring alignment with industry standards and regulatory requirements. Findings indicate that AI-assisted learning significantly enhanced students' ability to evaluate security policies, refine risk assessments, and bridge theoretical knowledge with practical application. Student reflections and instructor observations revealed improvements in analytical engagement, yet challenges emerged regarding AI over-reliance, variability in AI literacy, and the contextual limitations of AI-generated content. Through structured intervention and research-driven refinement, students were able to recognize AI's strengths as a generative tool while acknowledging the need for human oversight. This study further highlights the broader implications of AI adoption in cybersecurity education, emphasizing the necessity of balancing automation with expert judgment to cultivate industry-ready professionals. Future research should explore the long-term impact of AI-driven learning on cybersecurity competency, as well as the potential for adaptive AI-assisted assessments to further personalize and enhance educational outcomes.